Active Agent Oriented Multimodal Interface System

Authors

  • Osamu Hasegawa
  • Katsunobu Itou
  • Takio Kurita
  • Satoru Hayamizu
  • Kazuyo Tanaka
  • Kazuhiko Yamamoto
  • Nobuyuki Otsu
Abstract

This paper presents a prototype of an interface system with an active human-like agent. In ordinary human communication, non-verbal expressions play important roles: they convey emotional information and also control the timing of interaction. This project attempts to introduce multimodality into human-computer interaction. Our human-like agent, with its realistic facial expressions, identifies the user by sight and interacts actively and individually with each user in spoken language. That is, the agent sees the human, visually recognizes who the person is, maintains eye contact through its facial display, and initiates the spoken-language interaction by talking to the human first.
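The abstract describes an interaction cycle (detect and identify the user by vision, establish eye contact with the facial display, then open the spoken dialogue agent-first). The paper's implementation details are not given here, so the following is only a rough illustrative sketch in Python; all class and method names (User, ActiveAgent, recognizer, face_display, speech_io) are hypothetical and not taken from the original system.

from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class User:
    name: str
    face_position: Tuple[int, int]  # where the face appears in the camera image


class ActiveAgent:
    def __init__(self, recognizer, face_display, speech_io):
        self.recognizer = recognizer      # vision-based face detection / identification
        self.face_display = face_display  # animated facial display (gaze, expression)
        self.speech_io = speech_io        # speech synthesis and recognition

    def step(self) -> None:
        frame = self.recognizer.capture()
        user: Optional[User] = self.recognizer.identify(frame)
        if user is None:
            return  # nobody in front of the camera; keep watching

        # Non-verbal channel: turn the displayed face toward the user (eye contact).
        self.face_display.look_at(user.face_position)

        # The agent takes the initiative and speaks first, addressing the user by name.
        self.speech_io.say(f"Hello, {user.name}. How can I help you?")

        # Only then does it fall back to a conventional spoken-dialogue exchange.
        request = self.speech_io.listen()
        self.handle_request(user, request)

    def handle_request(self, user: User, request: str) -> None:
        # Task-specific dialogue management would go here.
        self.speech_io.say(f"You said: {request}")

The point of the sketch is the ordering: identification and gaze control precede speech, so the agent, not the user, opens the exchange.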

Related articles

Task-Oriented Synergistic Multimodality

Multimodal interfaces aim to overcome the shortcomings of conventional human-computer communication (serialized input, excessive accuracy, and low input efficiency) by exploiting the complementary nature of multiple modalities in interpreting user intent, thus improving the efficiency and naturalness of human-computer interaction. In this paper, we present motivations driving multimodal interf...


Spoken and Multimodal Bus Timetable Systems: Design, Development and Evaluation

We present three speech-based bus timetable systems with different approaches. The first system had open user-initiative dialogues aiming at natural interaction. The user tests revealed several problems with this approach, and we developed solutions from multiple perspectives to overcome the problems. The second system focuses on task-oriented system-initiative dialogues, and contains a multimo...


Designing a Conversational Interface for a Multimodal Smartphone Programming-by-Demonstration Agent

In this position paper, we first summarize our work on designing the conversational interface for SUGILITE – a multimodal programming-by-demonstration system that enables a virtual agent to learn how to handle out-of-domain commands and perform the tasks using available third-party mobile apps in task-oriented dialogs from the user's demonstrations. We then discuss our planned future work on ena...


SmartKom: Towards Multimodal Dialogues with Anthropomorphic Interface Agents

SmartKom is a multimodal dialogue system that combines speech, gesture, and facial expressions for input and output. SmartKom provides an anthropomorphic and affective user interface through its personification of an interface agent. Understanding of spontaneous speech is combined with video-based recognition of natural gestures and facial expressions. One of the major scientific goals of Smart...


Fusion of Children's Speech and 2D Gestures when Interacting with 3D Embodied Conversational Characters

Most of the existing multimodal prototypes enabling users to combine 2D gestures and speech are task-oriented. They help adult users solve particular information tasks, often in standard 2D Graphical User Interfaces. This paper describes the NICE HCA system, which aims at demonstrating multimodal conversation between humans and embodied historical and literary characters. The target users are ...



Journal:

Volume   Issue

Pages  -

Publication date: 1995